A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang

Neural Information Processing Systems

Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of neural network verification. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain the exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find that this exact solution does not significantly narrow the gap between the bounds from PGD attacks and those from existing relaxed verifiers, for various networks trained normally or robustly on the MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it.
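
In the paper's setting, sketched here in generic notation rather than the authors' exact symbols, verification asks whether the network's output margin stays positive over the whole perturbation set, and relaxed verifiers replace each ReLU constraint with convex inequalities derived from pre-activation bounds:

```latex
% Exact verification (nonconvex because of the ReLU equality constraints):
\[
\min_{x}\; c^{\top}\hat{z}^{(L)}
\quad\text{s.t.}\quad
\lVert x - x_{0}\rVert_{\infty}\le \epsilon,\qquad
z^{(k)} = W^{(k)}\hat{z}^{(k-1)} + b^{(k)},\qquad
\hat{z}^{(k)} = \max\!\bigl(z^{(k)},\,0\bigr),
\]
% with \hat{z}^{(0)} = x. For a neuron with pre-activation bounds
% l \le z \le u where l < 0 < u, the tightest single-neuron convex
% relaxation is the "triangle":
\[
\hat{z} \ge 0, \qquad \hat{z} \ge z, \qquad
\hat{z} \le \frac{u\,(z - l)}{u - l}.
\]
```

Because the relaxed feasible set contains every point the true network can reach, a positive relaxed minimum certifies robustness; the framework characterizes which such relaxations the known verifiers correspond to.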


Reviews: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Neural Information Processing Systems

The paper targets the problem of robustness verification of neural networks, which is a very popular and important problem. One of the prominent ways to deal with it is to formulate it as a nonlinear optimization problem and then relax its constraints to form a linear programming (LP) relaxation. These relaxations are not guaranteed to return the optimal value, but they can be solved in polynomial time and provide bounds on the optimal solution. The main contributions of the paper are as follows: 1. Proposing a unified framework that generalizes all known layerwise LP relaxations and showing how they relate (i.e., which relaxation is tighter).
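
To make the relaxation concrete, here is a minimal, self-contained sketch of this idea on a made-up two-neuron ReLU network (the weights, input box, and sizes below are illustrative assumptions, not anything from the paper): interval arithmetic gives pre-activation bounds, each unstable ReLU is replaced by its triangle relaxation, and an off-the-shelf LP solver bounds the output.

```python
import numpy as np
from scipy.optimize import linprog

# Toy ReLU network: x in R^2 -> 2 hidden ReLU units -> scalar output.
# All weights here are made up purely for illustration.
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([-1.0, -1.0])
b2 = 0.0

# Input box (the adversarial perturbation region).
x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Pre-activation bounds z_lo <= W1 x + b1 <= z_hi via interval arithmetic.
W1_pos, W1_neg = np.clip(W1, 0, None), np.clip(W1, None, 0)
z_lo = W1_pos @ x_lo + W1_neg @ x_hi + b1
z_hi = W1_pos @ x_hi + W1_neg @ x_lo + b1

# LP variables: v = [x (2), z (2), y (2)], where y relaxes relu(z).
n = 6
A_ub, b_ub, A_eq, b_eq = [], [], [], []

# Linear layer as equality constraints: W1 x - z = -b1.
for i in range(2):
    row = np.zeros(n); row[:2], row[2 + i] = W1[i], -1.0
    A_eq.append(row); b_eq.append(-b1[i])

# ReLU constraints, neuron by neuron.
for i in range(2):
    l, u, zi, yi = z_lo[i], z_hi[i], 2 + i, 4 + i
    if u <= 0:          # provably inactive: y = 0
        row = np.zeros(n); row[yi] = 1.0
        A_eq.append(row); b_eq.append(0.0)
    elif l >= 0:        # provably active: y = z
        row = np.zeros(n); row[yi], row[zi] = 1.0, -1.0
        A_eq.append(row); b_eq.append(0.0)
    else:               # unstable: triangle relaxation
        row = np.zeros(n); row[zi], row[yi] = 1.0, -1.0   # y >= z
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n); row[yi] = -1.0                  # y >= 0
        A_ub.append(row); b_ub.append(0.0)
        s = u / (u - l)                                    # y <= s * (z - l)
        row = np.zeros(n); row[yi], row[zi] = 1.0, -s
        A_ub.append(row); b_ub.append(-s * l)

# Minimize the output over the relaxed feasible set; a positive optimum
# would certify that the output stays positive over the whole input box.
c = np.zeros(n); c[4:] = w2
bounds = [(x_lo[0], x_hi[0]), (x_lo[1], x_hi[1])] + [(None, None)] * 4
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print("certified lower bound on the output:", res.fun + b2)  # about -2.33
```

Since the relaxed feasible set is a superset of everything the real network can do, the LP optimum is always a sound, if possibly loose, lower bound on the true minimum output.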


Reviews: A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Neural Information Processing Systems

The paper proposes a general framework for layer-wise LP relaxations and shows how the existing relaxations compare in tightness. Further, it shows that there is a theoretical barrier to how tight layer-wise LP relaxations can be.
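
For contrast with the exact side, the sketch below (same made-up toy network as above, again purely illustrative) computes the exact minimum by enumerating all ReLU activation patterns, each of which reduces to a small LP; comparing it with the relaxed bound above exhibits, in miniature, the kind of gap the barrier result concerns.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Same made-up toy network as in the previous sketch.
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([-1.0, -1.0])
b2 = 0.0
x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# Exact verification by case analysis: fix which ReLUs are active; the
# network is then affine in x, so each case is one small LP over the box.
# The 2**h cases are what make exact verifiers exponential time.
best = np.inf
for pattern in itertools.product([False, True], repeat=2):
    A_ub, b_ub = [], []           # constraints pinning the activation pattern
    obj, const = np.zeros(2), b2  # output as an affine function of x
    for i, active in enumerate(pattern):
        if active:   # require z_i >= 0; the unit contributes w2_i * z_i
            A_ub.append(-W1[i]); b_ub.append(b1[i])
            obj += w2[i] * W1[i]; const += w2[i] * b1[i]
        else:        # require z_i <= 0; the unit contributes nothing
            A_ub.append(W1[i]); b_ub.append(-b1[i])
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=list(zip(x_lo, x_hi)))
    if res.status == 0:  # this activation pattern occurs inside the box
        best = min(best, res.fun + const)
print("exact minimum output:", best)  # -2.0, vs. about -2.33 from the LP
```

Even on this two-neuron toy, the triangle relaxation leaves a gap; the paper's point is that on realistic networks, even the optimal relaxation within the layer-wise framework does not close the corresponding gap.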

